A Capstone Project
2025-04-22
Fine-Tuning an LLM for Chrysler Crossfire Troubleshooting
Dre Dyson University of West Florida
Brief Intro: “Hi everyone. My name is Dre Dyson… today, I want to share my Capstone project with you. It’s about fine-tuning a Large Language Model… that focuses on troubleshooting and repairing the Chrysler Crossfire.”
Chrysler Crossfire
4-step process graphic: Collect, Generate, Train, Test
| Metrics | Fine-tuning Dataset |
|---|---|
| No. of dialogues | 8385 |
| Total no. of turns | 79663 |
| Avg. turns per dialogue | 9.5 |
| Avg. tokens per turn | 41.62 |
| Total unique tokens | 70165 |
Table 1: Characteristics of the conversational dataset generated by ATK.
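The per-dialogue average in Table 1 follows directly from the raw counts. A quick sanity check (using only the numbers reported in the table):

```python
# Numbers taken from Table 1.
num_dialogues = 8385
total_turns = 79663

# Average turns per dialogue should match the reported 9.5.
avg_turns_per_dialogue = total_turns / num_dialogues
print(round(avg_turns_per_dialogue, 1))  # 9.5
```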
1. Training Validation: Did the Model Learn?
The train/loss plot shows how well the model absorbed the Crossfire data during fine-tuning. (Figure: Training Loss Curve)
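A loss curve that falls and then flattens is the visual signal that the model converged on the data. Raw per-step losses are noisy, so curves like this are often smoothed before plotting. A minimal sketch, assuming illustrative (made-up) loss values rather than the actual run's log:

```python
# Hypothetical per-step training losses -- illustrative only, not the real run.
losses = [2.8, 2.1, 1.7, 1.4, 1.2, 1.1, 1.05, 1.0, 0.98, 0.97]

def ema(values, alpha=0.3):
    """Exponential moving average, a common way to smooth a loss curve."""
    smoothed = []
    current = values[0]
    for v in values:
        current = alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

smooth = ema(losses)
# If training converged, the smoothed curve still trends downward.
assert smooth[-1] < smooth[0]
```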
2. Testing: How Did It Perform vs. Base Models?
Comparison Table:
| Model | Battery Type | Front Wheel Size | Headlight Model | Rear Wheel Size | Throttle Reset Proc. |
|---|---|---|---|---|---|
| Chrysler Crossfire Model | Correct | Correct | Correct | Correct | Correct |
| Llama 3.1 8B | Incorrect | Incorrect | Incorrect | Incorrect | Incorrect |
| Llama 3.1 70B | Incorrect | Incorrect | Incorrect | Incorrect | Incorrect |
| Llama 3.1 405B | Incorrect | Correct | Incorrect | Incorrect | Incorrect |
Accuracy Table:
| Model | Correct Answers | Total Questions | Accuracy (%) |
|---|---|---|---|
| Chrysler Crossfire Model | 5 | 5 | 100 |
| Llama 3.1 8B | 0 | 5 | 0 |
| Llama 3.1 70B | 0 | 5 | 0 |
| Llama 3.1 405B | 1 | 5 | 20 |
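The accuracy column is simply correct answers over total questions, scaled to a percentage. Recomputing it from the raw counts in the table above:

```python
# Correct/total counts from the accuracy table.
results = {
    "Chrysler Crossfire Model": (5, 5),
    "Llama 3.1 8B": (0, 5),
    "Llama 3.1 70B": (0, 5),
    "Llama 3.1 405B": (1, 5),
}

for model, (correct, total) in results.items():
    accuracy = 100 * correct / total
    print(f"{model}: {accuracy:.0f}%")
# Chrysler Crossfire Model: 100%
# Llama 3.1 8B: 0%
# Llama 3.1 70B: 0%
# Llama 3.1 405B: 20%
```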